    Contrastive Hebbian Learning with Random Feedback Weights

    Neural networks are commonly trained to make predictions through learning algorithms. Contrastive Hebbian learning, a powerful rule inspired by gradient backpropagation, is based on Hebb's rule and the contrastive divergence algorithm. It operates in two phases: a forward (or free) phase, in which the data are fed to the network, and a backward (or clamped) phase, in which the target signals are clamped to the output layer and the feedback signals are transformed through the transposed synaptic weight matrices. This implies a symmetry at the synaptic level for which there is no evidence in the brain. In this work, we propose a new variant of the algorithm, called random contrastive Hebbian learning, that does not rely on any synaptic weight symmetry. Instead, it uses random matrices to transform the feedback signals during the clamped phase, and the neural dynamics are described by first-order non-linear differential equations. The algorithm is experimentally verified on a Boolean logic task, classification tasks (handwritten digits and letters), and an autoencoding task. We also show how the parameters, especially the random matrices, affect learning, and we use pseudospectra analysis to investigate further how the random matrices impact the learning process. Finally, we discuss the biological plausibility of the proposed algorithm and how it can give rise to better computational models of learning.
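    To make the two-phase mechanics concrete, here is a minimal sketch in Python/NumPy. The toy network, the Euler integration scheme, and the exact form of the Hebbian contrast are illustrative assumptions, not the authors' formulation; the point being shown is that clamped-phase feedback flows through a fixed random matrix G instead of the transpose of the forward weights.

        import numpy as np

        rng = np.random.default_rng(0)
        n_in, n_hid, n_out = 4, 16, 2  # toy layer sizes (assumed)

        W1 = rng.normal(0, 0.1, (n_hid, n_in))   # input -> hidden
        W2 = rng.normal(0, 0.1, (n_out, n_hid))  # hidden -> output
        G = rng.normal(0, 0.1, (n_hid, n_out))   # fixed random feedback matrix,
                                                 # replacing W2.T of classic CHL

        def settle(x, target=None, steps=50, dt=0.1, gamma=0.5):
            """Relax activities under first-order dynamics dh/dt = -h + tanh(drive)."""
            h, y = np.zeros(n_hid), np.zeros(n_out)
            for _ in range(steps):
                fb = G @ y if target is not None else 0.0  # feedback only when clamped
                h += dt * (-h + np.tanh(W1 @ x + gamma * fb))
                y = target if target is not None else np.tanh(W2 @ h)
            return h, y

        def rchl_step(x, target, eta=0.05):
            """One update: contrast clamped-phase vs. free-phase correlations."""
            global W1, W2
            h_free, y_free = settle(x)                   # free phase
            h_clamp, y_clamp = settle(x, target=target)  # clamped phase
            W1 += eta * (np.outer(h_clamp, x) - np.outer(h_free, x))
            W2 += eta * (np.outer(y_clamp, h_clamp) - np.outer(y_free, h_free))

        # Toy usage on a small Boolean task (one-hot labels).
        X = np.array([[0, 0, 1, 1], [0, 1, 0, 1], [1, 0, 0, 1], [1, 1, 1, 0]], float)
        T = np.array([[1, 0], [0, 1], [0, 1], [1, 0]], float)
        for _ in range(200):
            for x, t in zip(X, T):
                rchl_step(x, t)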

    Accidental Learners: Spoken Language Identification in Multilingual Self-Supervised Models

    In this paper, we extend previous self-supervised approaches for language identification by experimenting with a Conformer-based architecture in a multilingual pre-training paradigm. We find that pre-trained speech models encode language-discriminative information most strongly in their lower layers. Further, we demonstrate that the embeddings obtained from these layers are robust enough to classify unseen languages and different acoustic environments without additional training. After fine-tuning a pre-trained Conformer model on the VoxLingua107 dataset, we achieve results comparable to current state-of-the-art systems for language identification, and our model accomplishes this with 5x fewer parameters. We open-source the model through the NVIDIA NeMo toolkit. Comment: Submitted to ICASSP 202
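    The lower-layer probing idea can be sketched generically in PyTorch. The stacked encoder below is a hypothetical stand-in for the pre-trained Conformer (which the paper loads via NVIDIA NeMo); what is being illustrated is the forward-hook pattern for tapping an intermediate layer and training a linear probe on the frozen embeddings.

        import torch
        import torch.nn as nn

        # Hypothetical stand-in for a pre-trained speech encoder; in practice this
        # would be a Conformer checkpoint from a toolkit such as NVIDIA NeMo.
        encoder = nn.Sequential(*[
            nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True)
            for _ in range(12)
        ])

        captured = {}

        def hook(module, inputs, output):
            # Mean-pool over frames to get one embedding per utterance.
            captured["emb"] = output.mean(dim=1).detach()

        # Tap a lower layer (layer 3 of 12 here), where the paper reports
        # language-discriminative information concentrates.
        encoder[3].register_forward_hook(hook)

        # Linear probe on frozen embeddings: no additional encoder training.
        probe = nn.Linear(256, 107)  # 107 language classes, as in VoxLingua107
        opt = torch.optim.Adam(probe.parameters(), lr=1e-3)

        features = torch.randn(8, 200, 256)   # dummy batch: 8 utterances, 200 frames
        labels = torch.randint(0, 107, (8,))  # dummy language labels

        with torch.no_grad():
            encoder(features)                 # fills captured["emb"] via the hook
        loss = nn.functional.cross_entropy(probe(captured["emb"]), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()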

    A Chat About Boring Problems: Studying GPT-based text normalization

    Text normalization, the conversion of text from written to spoken form, is traditionally assumed to be an ill-formed task for language models. In this work, we argue otherwise. We empirically show the capacity of large language models (LLMs) for text normalization in few-shot scenarios. Combining self-consistency reasoning with linguistically informed prompt engineering, we find that LLM-based text normalization achieves error rates around 40% lower than those of top normalization systems. Further, upon error analysis, we note key limitations in the conventional design of text normalization tasks. We create a new taxonomy of text normalization errors and apply it to results from GPT-3.5-Turbo and GPT-4.0. Through this new framework, we can identify the strengths and weaknesses of GPT-based TN, opening opportunities for future work.
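    The few-shot, self-consistency recipe can be sketched as follows. The complete() stub and the demonstration prompt are placeholders (not the paper's actual prompts); the stub would be wired to whatever chat-completion client serves GPT-3.5-Turbo or GPT-4.0.

        from collections import Counter

        def complete(prompt: str, temperature: float = 0.7) -> str:
            """Placeholder for one sampled LLM completion; wire this to an
            actual chat-completion client."""
            raise NotImplementedError

        # Linguistically informed few-shot prompt (illustrative examples only).
        FEW_SHOT = (
            "Normalize written text to its spoken form.\n"
            "Written: Dr. Smith lives at 123 Main St.\n"
            "Spoken: doctor smith lives at one twenty three main street\n"
            "Written: The meeting is at 3:30 PM.\n"
            "Spoken: the meeting is at three thirty p m\n"
            "Written: {text}\n"
            "Spoken:"
        )

        def normalize(text: str, n_samples: int = 5) -> str:
            """Self-consistency: sample several normalizations at nonzero
            temperature and return the majority answer."""
            prompt = FEW_SHOT.format(text=text)
            votes = Counter(complete(prompt).strip().lower() for _ in range(n_samples))
            return votes.most_common(1)[0][0]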

    Results of the Second SIGMORPHON Shared Task on Multilingual Grapheme-to-Phoneme Conversion

    Grapheme-to-phoneme conversion is an important component in many speech technologies, but until recently there were no multilingual benchmarks for this task. The second iteration of the SIGMORPHON shared task on multilingual grapheme-to-phoneme conversion features many improvements over the previous year's task (Gorman et al. 2020), including additional languages, a stronger baseline, three subtasks varying the amount of available resources, extensive quality-assurance procedures, and automated error analyses. Four teams submitted a total of thirteen systems, at best achieving relative reductions in word error rate of 11% in the high-resource subtask and 4% in the low-resource subtask.
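    For clarity on the headline numbers: a "relative" reduction measures the drop against the baseline error rate rather than in absolute points. A two-line illustration (the 20% baseline figure is invented for the example):

        def relative_wer_reduction(baseline: float, system: float) -> float:
            """(baseline - system) / baseline, e.g. 0.20 -> 0.178 is an 11% relative drop."""
            return (baseline - system) / baseline

        assert round(relative_wer_reduction(0.20, 0.178), 2) == 0.11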

    A Field Guide to Finding Fossils on Mars

    The Martian surface is cold, dry, exposed to biologically harmful radiation, and apparently barren today. Nevertheless, there is clear geological evidence for warmer, wetter intervals in the past that could have supported life at or near the surface. This evidence has motivated the National Aeronautics and Space Administration and the European Space Agency to prioritize the search for any remains or traces of organisms from early Mars in forthcoming missions. Informed by (1) stratigraphic, mineralogical, and geochemical data collected by previous and current missions, (2) Earth's fossil record, and (3) experimental studies of organic decay and preservation, we consider here whether, how, and where fossils and isotopic biosignatures could have been preserved in the depositional environments and mineralizing media thought to have been present in habitable settings on early Mars. We conclude that Noachian‐Hesperian Fe‐bearing, clay‐rich fluvio‐lacustrine siliciclastic deposits, especially where enriched in silica, currently represent the most promising and best understood astropaleontological targets. Siliceous sinters would also be an excellent target, but their presence on Mars awaits confirmation. More work is needed to improve our understanding of fossil preservation in the context of other environments specific to Mars, particularly within evaporative salts and pore/fracture‐filling subsurface minerals.

    Accelerated surgery versus standard care in hip fracture (HIP ATTACK): an international, randomised, controlled trial


    The Remediated Bakhtin: Heteroglossia and the New Media
